Session 1-B

Scheduling I

Conference: 2:00 PM — 3:30 PM EDT
Local: Tue, Jul 7, 11:00 AM — 12:30 PM PDT

A Converse Result on Convergence Time for Opportunistic Wireless Scheduling

Michael Neely (University of Southern California, USA)

This paper proves an impossibility result for stochastic network utility maximization in multi-user wireless systems, including multi-access and broadcast systems. In every time slot, an access point observes the current channel states and opportunistically selects a vector of transmission rates. Channel state vectors are assumed to be independent and identically distributed with an unknown probability distribution. The goal is to learn to make decisions over time that maximize a concave utility function of each user's running time-average transmission rate. Recently, it was shown that a stochastic Frank-Wolfe algorithm converges to utility-optimality with an error of \(O(\log(T)/T)\), where \(T\) is the time the algorithm has been running. An \(\Omega(1/T)\) converse was already known. The current paper improves the converse to \(\Omega(\log(T)/T)\), which matches the known achievability result. The proof uses a reduction from the opportunistic scheduling problem to a Bernoulli estimation problem and, along the way, refines a result on Bernoulli estimation.
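As a rough illustration of the scheduling loop this line of work analyzes, the sketch below runs a Frank-Wolfe-style opportunistic scheduler for a hypothetical two-user multi-access channel. The fading model (uniform fades, one user served per slot) and the proportional-fair utility are illustrative assumptions, not details from the paper:

```python
import numpy as np

def frank_wolfe_schedule(T, sample_rates, utility_grad, seed=0):
    """Stochastic Frank-Wolfe for opportunistic scheduling (sketch).

    Each slot: observe the i.i.d. feasible rate vectors, pick the one
    maximizing the inner product with the utility gradient at the
    current running-average rate, then update the running average.
    """
    rng = np.random.default_rng(seed)
    x_bar = np.zeros(2)                # running time-average rate per user
    for t in range(1, T + 1):
        options = sample_rates(rng)    # feasible rate vectors this slot
        g = utility_grad(x_bar)        # gradient of the concave utility
        r = max(options, key=lambda v: g @ v)  # Frank-Wolfe linear step
        x_bar += (r - x_bar) / t       # running-average update
    return x_bar

# Illustrative 2-user multi-access channel: each slot serve exactly one
# user, whose rate is an independent fade in [0, 1].
def sample_rates(rng):
    fades = rng.random(2)
    return [np.array([fades[0], 0.0]), np.array([0.0, fades[1]])]

# Proportional-fair utility U(x) = sum(log x_i); gradient 1/x (regularized).
def utility_grad(x):
    return 1.0 / np.maximum(x, 1e-3)

x = frank_wolfe_schedule(5000, sample_rates, utility_grad)
```

With symmetric users, the time-average rates settle near 1/3 per user (each user wins about half the slots, with conditional mean fade 2/3).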

Is Deadline Oblivious Scheduling Efficient for Controlling Real-Time Traffic in Cellular Downlink Systems?

Sherif ElAzzouni, Eylem Ekici and Ness B. Shroff (The Ohio State University, USA)

The emergence of bandwidth-intensive, latency-critical traffic in 5G networks, such as virtual reality, has motivated interest in wireless resource allocation for flows with hard deadlines. This problem brings two challenges: (i) flow arrivals and channel states are not known to the Base Station (BS) a priori, so allocation decisions must be made online; and (ii) wireless resource allocation algorithms that attempt to maximize a reward will likely be unfair, causing unacceptable service for some users. We model the problem as an online convex optimization problem. We propose a primal-dual Deadline-Oblivious (DO) algorithm and show that it is approximately 3.6-competitive. Furthermore, we show via simulations that our algorithm tracks the prescient offline solution very closely, significantly outperforming several existing algorithms. In the second part, we impose a stochastic constraint on the allocation, requiring a guarantee that each user achieves a certain timely throughput (the amount of traffic delivered within its deadline over a period of time). We propose the Long-term Fair Deadline-Oblivious (LFDO) algorithm for that setup. Combining the Lyapunov framework with the analysis of online algorithms, we show that LFDO retains the high performance of DO while satisfying the long-term stochastic constraints.
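The primal-dual pattern behind this style of online allocation can be sketched in a few lines. This is the generic online primal-dual admission rule, not the paper's DO algorithm; the requests, capacity, and multiplicative price-growth rule are illustrative assumptions:

```python
def primal_dual_allocate(requests, capacity, growth=2.0):
    """Generic online primal-dual admission (sketch).

    Each request carries a (value, size) pair. A dual 'price' on the
    shared resource rises multiplicatively as capacity is consumed;
    a request is admitted only if its value-per-unit beats the price.
    """
    price, used, accepted = 1.0 / capacity, 0.0, []
    for i, (value, size) in enumerate(requests):
        if value / size >= price and used + size <= capacity:
            accepted.append(i)
            used += size
            price *= growth ** (size / capacity)  # raise price with load
    return accepted, used

# Hypothetical request stream: (value, size) pairs arriving online.
reqs = [(5.0, 1.0), (0.5, 1.0), (4.0, 2.0), (3.0, 1.0)]
acc, used = primal_dual_allocate(reqs, capacity=4.0)
```

The multiplicative price update is what yields constant competitive ratios in this framework: early requests face a low bar, while late requests must justify consuming scarce remaining capacity.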

INFOCOM 2020 Best Paper: On the Power of Randomization for Scheduling Real-Time Traffic in Wireless Networks

Christos Tsanikidis and Javad Ghaderi (Columbia University, USA)

In this paper, we consider the problem of scheduling real-time traffic in wireless networks under a conflict-graph interference model with single-hop traffic. The objective is to guarantee that at least a certain fraction of each link's packets are delivered within their deadlines; this fraction is referred to as the delivery ratio. The problem has previously been studied under restrictive frame-based traffic models, or with greedy maximal scheduling schemes such as LDF (Largest-Deficit First), which provide poor delivery ratios for general traffic patterns. In this paper, we pursue a different approach through randomization over the choice of the maximal set of links that transmit at each time. We design randomized policies for collocated networks, multi-partite networks, and general networks that achieve delivery ratios much higher than what is achievable by LDF. Further, our results apply to traffic (arrival and deadline) processes that evolve as positive recurrent Markov chains. Hence, this work improves on past work in both efficiency and traffic assumptions. We also present extensive simulation results over various traffic patterns and interference graphs to illustrate the gains of our randomized policies over LDF variants.
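The basic object being randomized, a maximal conflict-free set of links, can be sampled by activating links in a random order. This is only a sketch of the randomization idea over a tiny hypothetical conflict graph; the paper's policies choose the distribution over maximal sets much more carefully:

```python
import random

def randomized_maximal_schedule(links, conflict, rng):
    """Sample a maximal set of non-conflicting links by greedily
    activating links in a uniformly random order (sketch)."""
    order = links[:]
    rng.shuffle(order)
    chosen = []
    for l in order:
        # add l only if it conflicts with nothing already scheduled
        if all((l, m) not in conflict and (m, l) not in conflict for m in chosen):
            chosen.append(l)
    return chosen

rng = random.Random(1)
links = ["a", "b", "c"]
conflict = {("a", "b")}  # links a and b interfere; c conflicts with neither
sched = randomized_maximal_schedule(links, conflict, rng)
```

Every run yields a maximal schedule: here, link c plus exactly one of a or b, with the choice randomized rather than fixed by a deficit ordering as in LDF.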

OST: On-Demand TSCH Scheduling with Traffic-awareness

Seungbeom Jeong and Hyung-Sin Kim (Seoul National University, Korea (South)); Jeongyeup Paek (Chung-Ang University, Korea (South)); Saewoong Bahk (Seoul National University, Korea (South))

As emerging Internet of Things (IoT) devices and applications flourish, demand for reliable and energy-efficient low-power wireless network protocols is surging. For this purpose, IEEE 802.15.4 standardized time-slotted channel hopping (TSCH), a promising and viable link-layer solution that has shown outstanding performance, achieving over 99% reliability at low duty cycles. However, it lacks one thing: flexibility. Due to its static timeslot scheduling, it cannot adapt to the wide variety of applications with varying traffic loads and unpredictable routing topologies. To this end, we propose OST, an On-demand Scheduling scheme for TSCH with traffic awareness. In OST, each node dynamically self-adjusts the frequency of its timeslots at run time according to time-varying traffic intensity. Moreover, OST features on-demand resource allocation to handle bursty or queued packets in a timely manner. By doing so, it aims to minimize energy consumption while guaranteeing reliable packet delivery. We evaluate OST on a large-scale 72-node testbed, demonstrating a 60% improvement in reliability and a 52% improvement in energy efficiency compared to the state of the art.
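The self-adjustment idea can be illustrated with a simple multiplicative rule. This is my own sketch of a traffic-aware period adaptation, not OST's exact mechanism; the thresholds and the halve/double rule are illustrative assumptions:

```python
def adapt_slot_period(period, queue_len, min_period=1, max_period=128):
    """Illustrative traffic-aware adaptation (sketch, not OST's rule):
    pending traffic halves the wake-up period so the node gets
    timeslots more often; an empty queue doubles it to save energy."""
    if queue_len > 0:
        return max(min_period, period // 2)
    return min(max_period, period * 2)

# Simulate a burst of queued packets followed by idle slots.
p, periods = 16, []
for q in [3, 2, 0, 0]:
    p = adapt_slot_period(p, q)
    periods.append(p)
```

The period ramps down quickly under load (16 → 8 → 4) and backs off when the queue drains (4 → 8 → 16), trading a little latency for duty-cycle savings.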

Session Chair

Cong Wang (Old Dominion University)

Session 2-B

Network Optimization I

Conference: 4:00 PM — 5:30 PM EDT
Local: Tue, Jul 7, 1:00 PM — 2:30 PM PDT

Communication-Efficient Network-Distributed Optimization with Differential-Coded Compressors

Xin Zhang, Jia Liu and Zhengyuan Zhu (Iowa State University, USA); Elizabeth Serena Bentley (AFRL, USA)

Network-distributed optimization has attracted significant attention in recent years due to its ever-widening range of applications. However, the classic decentralized gradient descent (DGD) algorithm is communication-inefficient for large-scale, high-dimensional network-distributed optimization problems. To address this challenge, many compressed DGD-based algorithms have been proposed. However, most existing works have high complexity and assume compressors with bounded noise power. To overcome these limitations, we propose a new differential-coded compressed DGD (DC-DGD) algorithm. The key features of DC-DGD are: i) it works with general SNR-constrained compressors, relaxing the bounded noise power assumption; ii) the differential-coded design entails the same convergence rate as the original DGD algorithm; and iii) it has the same low-complexity structure as the original DGD due to a self-noise-reduction effect. Moreover, these features inspire a hybrid compression scheme that offers a systematic mechanism to minimize the communication cost. Finally, we conduct extensive experiments to verify the efficacy of the proposed DC-DGD algorithm and hybrid compressor.
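The differential-coding idea is that nodes compress the *difference* between the current iterate and the receiver's last synchronized estimate, so the quantity on the wire (and hence the compression noise) shrinks as iterates converge. The toy below applies that idea to a two-node consensus problem with a crude top-1 compressor; the problem instance, compressor, and step sizes are my own illustrative choices, not the paper's:

```python
import numpy as np

def top1(v):
    """Keep only the largest-magnitude entry: a crude lossy compressor
    standing in for a general SNR-constrained compressor."""
    out = np.zeros_like(v)
    i = int(np.argmax(np.abs(v)))
    out[i] = v[i]
    return out

def run_dc_consensus(steps=200, lr=0.1):
    """Two nodes minimize f_i(x) = ||x - t_i||^2 while gossiping only
    compressed differences x_i - x_hat_i, where x_hat_i is the shared
    estimate of x_i held by both sides."""
    targets = [np.array([1.0, 3.0]), np.array([3.0, 1.0])]
    x = [t.copy() for t in targets]     # local iterates
    hat = [np.zeros(2), np.zeros(2)]    # synchronized estimates of each x_i
    for _ in range(steps):
        for i in range(2):              # broadcast compressed differences
            hat[i] = hat[i] + top1(x[i] - hat[i])
        for i in range(2):              # mix estimates, take gradient step
            j = 1 - i
            x[i] = x[i] + 0.5 * (hat[j] - hat[i]) - lr * 2.0 * (x[i] - targets[i])
    return x

xs = run_dc_consensus()
```

Despite the aggressive one-coordinate-per-round compression, the iterates settle near the midpoint of the two targets, because the transmitted differences vanish as the algorithm converges.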

How to Distribute Computation in Networks

Derya Malak (Rensselaer Polytechnic Institute, USA); Alejandro Cohen and Muriel Médard (MIT, USA)

We study the function computation problem in a communication network. The rate region for the function computation problem in general topologies is an open problem and has been considered only under certain restrictive assumptions (e.g., tree networks, linear functions). We are motivated by the fact that in-network computation can serve as a means to reduce the required communication flow, in terms of the number of bits transmitted per source symbol, and to provide a sparse representation or labeling. To understand the limits of computation, we introduce the notion of entropic surjectivity as a measure of how surjective a function is. Exploiting Little's law for stationary systems, we then provide a connection between this notion and the proportion of flow (which we call the computation processing factor) that requires communication. This connection gives an understanding of how much a node (in isolation) should compute (or compress) in order to communicate the desired function within the network. Our analysis does not place any assumptions on the network topology; it characterizes functions only via their entropic surjectivity and provides insight into how to distribute computation depending on the entropic surjectivity of the computation task.
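As a toy numerical illustration of the underlying intuition, that a function which collapses many inputs has low output entropy and therefore needs fewer bits to communicate, consider a uniform 2-bit source and a parity function. This example is my own; the paper's entropic surjectivity is a more refined measure than plain output entropy:

```python
from math import log2
from collections import Counter

def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

def parity(x):
    return x % 2

source = [0, 1, 2, 3]                    # uniform 2-bit source

h_source = entropy([1 / len(source)] * len(source))   # bits to send the source
counts = Counter(parity(x) for x in source)
h_output = entropy([c / len(source) for c in counts.values()])  # bits for f(X)
```

Communicating the source costs 2 bits per symbol, but communicating only the parity costs 1 bit: computing inside the network halves the required flow in this toy case.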

Simple and Fast Distributed Computation of Betweenness Centrality

Pierluigi Crescenzi (Université de Paris, France & University of Florence, Italy); Pierre Fraigniaud (CNRS and Université Paris 7, France); Ami Paz (Faculty of Computer Science - University of Vienna, Austria)

Betweenness centrality is a graph parameter that has been successfully applied to network analysis. In the context of computer networks, it was considered for various objectives, ranging from routing to service placement. However, as observed by Maccari et al. [INFOCOM 2018], research on betweenness centrality for improving protocols was hampered by the lack of a usable, fully distributed algorithm for computing this parameter. We resolve this issue by designing an efficient algorithm for computing betweenness centrality, which can be implemented by minimal modifications to any distance-vector routing protocol based on Bellman-Ford. The convergence time of our implementation is shown to be proportional to the diameter of the network.
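For reference, the quantity being computed can be obtained centrally with Brandes' algorithm; the paper's contribution is to obtain the same parameter distributively, via minimal modifications to Bellman-Ford-style distance-vector exchanges, rather than as sketched here:

```python
from collections import deque

def betweenness(adj):
    """Brandes' centralized betweenness computation (unnormalized,
    counting ordered source-target pairs) on an unweighted graph."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors
        dist, order, q = {s: 0}, [], deque([s])
        sigma = {v: 0.0 for v in adj}
        sigma[s] = 1.0
        preds = {v: [] for v in adj}
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # back-propagate path dependencies in reverse BFS order
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Path graph a - b - c: only b lies between the other two nodes.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
bc = betweenness(adj)
```

On the path a-b-c, node b scores 2.0 (it sits on the shortest path for both ordered pairs (a, c) and (c, a)) while the endpoints score 0.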

Systematic Topology Design for Large-Scale Networks: A Unified Framework

Yijia Chang, Xi Huang, Longxiulin Deng and Ziyu Shao (ShanghaiTech University, China); Junshan Zhang (Arizona State University, USA)

For modern large-scale networked systems, ranging from cloud to edge computing, topology design has a significant impact on system performance in terms of scalability, cost, latency, throughput, and fault tolerance. These performance metrics may conflict with one another, and the design criteria often vary across networks. To date, there has been little theoretical foundation for topology design from a prescriptive perspective; the design process remains more of an art than a science. In this paper, we advocate a unified framework to describe, generate, and analyze topology designs in a systematic fashion. Building on the reverse-engineering of several existing topology designs, we propose a general procedure that can serve as a common language for describing topology design. Based on this procedure, we develop a top-down approach to systematic topology design, providing general criteria for the procedure and concrete tools based on combinatorial design theory. To validate the approach, we propose a novel topology model, through which we conduct quantitative performance analysis to characterize the trade-offs among performance metrics and generate new topologies with various advantages for different large-scale networks.

Session Chair

Atilla Eryilmaz (Ohio State University)
